Sketch-based image retrieval (SBIR) is the task of retrieving natural images (photos) that match the semantics and spatial configuration of a hand-drawn sketch query. The universality of sketches broadens the range of possible applications and increases the demand for efficient SBIR solutions. In this paper, we study classic triplet-based SBIR solutions and show that a persistent invariance to horizontal flips (even after model fine-tuning) harms performance. To overcome this limitation, we propose several approaches and evaluate each of them in depth to assess its effectiveness. Our main contributions are twofold: we propose and evaluate several intuitive modifications to build SBIR solutions with better flip equivariance, and we show that vision transformers are better suited to the SBIR task, outperforming CNNs by a large margin. We conducted numerous experiments and introduce the first model to surpass human performance on a large-scale SBIR benchmark (Sketchy). Our best model achieves a recall of 62.25% (at k = 1) on the Sketchy benchmark, against 46.2% for the previous state-of-the-art method.
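To make the triplet setup concrete, here is a minimal sketch (in PyTorch) of a triplet objective extended with a hypothetical flip term that pushes the embedding of a horizontally flipped sketch away from that of the original, discouraging flip invariance. The backbone, margin, and weighting are illustrative assumptions, not the authors' exact recipe.

```python
import torch
import torch.nn.functional as F

def triplet_flip_loss(embed, sketch, photo_pos, photo_neg, margin=0.2, flip_w=1.0):
    """Standard SBIR triplet loss plus an assumed penalty that discourages
    flip invariance: a flipped sketch should not embed where the original does."""
    a = embed(sketch)      # anchor: sketch embedding
    p = embed(photo_pos)   # positive: matching photo
    n = embed(photo_neg)   # negative: non-matching photo
    triplet = F.triplet_margin_loss(a, p, n, margin=margin)

    a_flip = embed(torch.flip(sketch, dims=[-1]))  # horizontal flip
    # Keep flipped-sketch embeddings at least `margin` away from the originals.
    flip_term = F.relu(margin - (a - a_flip).norm(dim=-1)).mean()
    return triplet + flip_w * flip_term

# Toy usage with a random linear embedder over flattened 64x64 inputs.
embed_net = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(64 * 64, 128))
s, p, n = (torch.randn(4, 1, 64, 64) for _ in range(3))
print(triplet_flip_loss(embed_net, s, p, n))
```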
Over the past few years, neural character animation has emerged, offering an automatic method for animating virtual characters whose motion is synthesized by a neural network. Controlling this motion in real time with user-defined control signals is also an important task in video games. Solutions based on fully-connected layers (MLPs) and mixtures of experts (MoE) have obtained impressive results in generating and controlling various motions with close interactions between the environment and the virtual character. However, the major drawback of fully-connected layers is their computational and memory cost, which may lead to sub-optimized solutions. In this work, we apply pruning algorithms to compress an MLP-MoE neural network in the context of interactive character animation, reducing its number of parameters and accelerating its computation time, with a trade-off between this acceleration and the quality of the synthesized motions. This work shows that, with the same number of experts and parameters, the pruned model produces fewer motion artifacts than the dense model, and that the learned high-level motion features are similar for both.
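The pruning step lends itself to a short sketch. The snippet below applies unstructured L1 (magnitude) pruning to one toy expert network using PyTorch's built-in pruning utilities; the layer sizes, output dimension, and pruning amount are placeholders, not the paper's configuration.

```python
import torch.nn as nn
import torch.nn.utils.prune as prune

# Toy stand-in for a single expert of an MLP-MoE animation network.
expert = nn.Sequential(
    nn.Linear(512, 512), nn.ELU(),
    nn.Linear(512, 512), nn.ELU(),
    nn.Linear(512, 311),  # pose-output dimension is a placeholder
)

# Magnitude (L1) pruning: zero out a fraction of each weight matrix.
for module in expert:
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.8)
        prune.remove(module, "weight")  # bake the sparsity into the tensor

linears = [m for m in expert if isinstance(m, nn.Linear)]
zeros = sum((m.weight == 0).sum().item() for m in linears)
total = sum(m.weight.numel() for m in linears)
print(f"pruned weights: {zeros}/{total}")
```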
Visual attention estimation is an active field of research at the crossroads of several disciplines: computer vision, artificial intelligence, and medicine. One of the most common approaches to estimating a saliency map representing attention is based on the observed image. In this paper, we show that visual attention can be retrieved from EEG acquisitions, with results comparable to traditional predictions from observed images, which is of great interest. For this purpose, a set of signals has been recorded, and different models have been developed to study the relationship between visual attention and brain activity. The results are encouraging and comparable with those of approaches based on other modalities. The code and dataset considered in this paper have been made available at \url{https://figshare.com/s/3e353bd1c621962888AD} to promote research in this field.
Emotion estimation is an active field of research with an important impact on the interaction between humans and computers. Among the different modalities for assessing emotions, electroencephalography (EEG), which represents brain activity, has shown motivating results over the last decade. Emotion estimation from EEG could contribute to the diagnosis or rehabilitation of certain diseases. In this paper, we propose a novel deep learning (DL) model, originally dedicated to computer vision, that takes into account physiological knowledge defined by experts. The joint learning is enhanced with a saliency analysis of the model. To present a global approach, the model has been evaluated on four publicly available datasets, achieving results comparable to state-of-the-art methods and outperforming them on two of the considered datasets, with a higher stability reflected in a lower standard deviation. For reproducibility, the code and models proposed in this paper are available at github.com/vdelv/emotion-eeg.
Convolutional Neural Networks (CNNs) have demonstrated superiority in learning patterns, but are sensitive to label noises and may overfit noisy labels during training. The early stopping strategy averts updating CNNs during the early training phase and is widely employed in the presence of noisy labels. Motivated by biological findings that the amplitude spectrum (AS) and phase spectrum (PS) in the frequency domain play different roles in the animal's vision system, we observe that PS, which captures more semantic information, can increase the robustness of DNNs to label noise, more so than AS can. We thus propose early stops at different times for AS and PS by disentangling the features of some layer(s) into AS and PS using Discrete Fourier Transform (DFT) during training. Our proposed Phase-AmplituDe DisentangLed Early Stopping (PADDLES) method is shown to be effective on both synthetic and real-world label-noise datasets. PADDLES outperforms other early stopping methods and obtains state-of-the-art performance.
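A minimal sketch of the disentangling step described above: features go through a 2-D DFT, are split into amplitude and phase, and two feature maps are reconstructed that each retain only one component. Which layer(s) this is applied to, and how the two stopping times are scheduled, are not reproduced here.

```python
import torch

def disentangle_as_ps(features: torch.Tensor):
    """Split feature maps into amplitude-only and phase-only reconstructions
    using a 2-D Discrete Fourier Transform."""
    spectrum = torch.fft.fft2(features)
    amplitude = torch.abs(spectrum)   # AS: magnitude per frequency
    phase = torch.angle(spectrum)     # PS: phase per frequency

    as_only = torch.fft.ifft2(amplitude).real               # zero phase
    ps_only = torch.fft.ifft2(torch.exp(1j * phase)).real   # unit amplitude
    return as_only, ps_only

x = torch.randn(8, 64, 32, 32)  # a batch of feature maps
as_feat, ps_feat = disentangle_as_ps(x)
print(as_feat.shape, ps_feat.shape)
```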
We present a novel neural model for modern poetry generation in French. The model consists of two pretrained neural models that are fine-tuned for the poem generation task. The encoder of the model is RoBERTa-based while the decoder is based on GPT-2. This way the model can benefit from the superior natural language understanding performance of RoBERTa and the good natural language generation performance of GPT-2. Our evaluation shows that the model can create French poetry successfully. On a 5-point scale, the lowest score of 3.57 was given by human judges to the typicality and emotionality of the output poetry, while the best score of 3.79 was given to understandability.
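A rough sketch of how such an encoder-decoder can be wired up with the transformers library; the French checkpoints used by the authors are not named in the abstract, so English roberta-base and gpt2 stand in here, and the prompt is invented.

```python
from transformers import AutoTokenizer, EncoderDecoderModel

# Stand-in checkpoints: substitute the French RoBERTa and GPT-2 models here.
model = EncoderDecoderModel.from_encoder_decoder_pretrained("roberta-base", "gpt2")
enc_tok = AutoTokenizer.from_pretrained("roberta-base")
dec_tok = AutoTokenizer.from_pretrained("gpt2")

# The cross-attention tying encoder to decoder is freshly initialized;
# fine-tuning on poems is what trains it.
model.config.decoder_start_token_id = dec_tok.bos_token_id
model.config.pad_token_id = dec_tok.eos_token_id

inputs = enc_tok("theme: autumn rain", return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=40)
print(dec_tok.decode(out[0], skip_special_tokens=True))
```

Without fine-tuning the generation is of course meaningless; the point is only the wiring of a pretrained encoder and decoder.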
Graph Neural Networks (GNNs) have been successfully applied in many applications in computer sciences. Despite the success of deep learning architectures in other domains, deep GNNs still underperform their shallow counterparts. There are many open questions about deep GNNs, but over-smoothing and over-squashing are perhaps the most intriguing issues. When stacking multiple graph convolutional layers, the over-smoothing and over-squashing problems arise and have been defined as the inability of GNNs to learn deep representations and propagate information from distant nodes, respectively. Even though the widespread definitions of both problems are similar, these phenomena have been studied independently. This work strives to understand the underlying relationship between over-smoothing and over-squashing from a topological perspective. We show that both problems are intrinsically related to the spectral gap of the Laplacian of the graph. Therefore, there is a trade-off between these two problems, i.e., we cannot simultaneously alleviate both over-smoothing and over-squashing. We also propose a Stochastic Jost and Liu curvature Rewiring (SJLR) algorithm based on a bound on the Ollivier-Ricci curvature. SJLR is less expensive than previous curvature-based rewiring methods while retaining fundamental properties. Finally, we perform a thorough comparison of SJLR with previous techniques to alleviate over-smoothing or over-squashing, seeking to gain a better understanding of both problems.
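As a sketch of the curvature computation such a rewiring builds on, the snippet below evaluates the Jost-Liu lower bound on the Ollivier-Ricci curvature of each edge; the stochastic add/remove rewiring step of SJLR itself is not reproduced, and the exact bound used by the authors is an assumption here.

```python
import networkx as nx

def jost_liu_lower_bound(G: nx.Graph, x, y) -> float:
    """Jost-Liu lower bound on the Ollivier-Ricci curvature of edge (x, y),
    where t counts the triangles through the edge and (.)_+ = max(., 0)."""
    dx, dy = G.degree(x), G.degree(y)
    t = len(set(G.neighbors(x)) & set(G.neighbors(y)))
    pos = lambda v: max(v, 0.0)
    return (-pos(1 - 1 / dx - 1 / dy - t / min(dx, dy))
            - pos(1 - 1 / dx - 1 / dy - t / max(dx, dy))
            + t / max(dx, dy))

G = nx.karate_club_graph()
curv = {e: jost_liu_lower_bound(G, *e) for e in G.edges()}
# Strongly negative edges are bottleneck candidates (over-squashing);
# strongly positive ones sit in dense regions (over-smoothing).
print(sorted(curv.items(), key=lambda kv: kv[1])[:3])
```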
We present a novel approach to generating news headlines in Finnish for a given news story. We model this as a summarization task where a model is given a news article, and its task is to produce a concise headline describing the main topic of the article. Because there are no openly available GPT-2 models for Finnish, we will first build such a model using several corpora. The model is then fine-tuned for the headline generation task using a massive news corpus. The system is evaluated by 3 expert journalists working in a Finnish media house. The results showcase the usability of the presented approach as a headline suggestion tool to facilitate the news production process.
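One way such a system can be prompted at inference time is sketched below; the checkpoint, the article/headline separator, and the decoding settings are all assumptions, since the abstract does not specify the fine-tuning format.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# "gpt2" stands in for the authors' Finnish GPT-2, which is not named here.
tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

article = "The city council approved the new tram line after a long debate."
prompt = article + "\nHeadline:"  # assumed fine-tuning format
ids = tok(prompt, return_tensors="pt", truncation=True, max_length=512)
out = model.generate(**ids, max_new_tokens=20, num_beams=5,
                     pad_token_id=tok.eos_token_id)
print(tok.decode(out[0][ids["input_ids"].shape[1]:], skip_special_tokens=True))
```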
We present a method for extracting a multilingual sentiment-annotated dialog data set from Fallout New Vegas. The game developers have preannotated every line of dialog in the game with one of 8 different sentiments: \textit{anger, disgust, fear, happy, neutral, pained, sad} and \textit{surprised}. The game has been translated into English, Spanish, German, French and Italian. We conduct experiments on multilingual, multilabel sentiment analysis on the extracted data set using multilingual BERT, XLMRoBERTa and language-specific BERT models. In our experiments, multilingual BERT outperformed XLMRoBERTa for most of the languages, and language-specific models were in turn slightly better than multilingual BERT for most of the languages. The best overall accuracy was 54\% and it was achieved by using multilingual BERT on Spanish data. The extracted data set presents a challenging task for sentiment analysis. We have released the data, including the testing and training splits, openly on Zenodo. The data set has been shuffled for copyright reasons.
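A minimal sketch of the multilingual, multilabel setup with the transformers library; the checkpoint name is the standard multilingual BERT, but the head configuration and the example line are illustrative, not the paper's training setup.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

LABELS = ["anger", "disgust", "fear", "happy",
          "neutral", "pained", "sad", "surprised"]

tok = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-multilingual-cased",
    num_labels=len(LABELS),
    problem_type="multi_label_classification",  # sigmoid outputs, BCE loss
)

batch = tok(["War. War never changes."], return_tensors="pt")
with torch.no_grad():
    probs = torch.sigmoid(model(**batch).logits)[0]
print({label: round(p.item(), 2) for label, p in zip(LABELS, probs)})
```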
Word order, an essential property of natural languages, is injected in Transformer-based neural language models using position encoding. However, recent experiments have shown that explicit position encoding is not always useful, since some models without such a feature managed to achieve state-of-the-art performance on some tasks. To better understand this phenomenon, we examine the effect of removing position encodings on the pre-training objective itself (i.e., masked language modelling), to test whether models can reconstruct position information from co-occurrences alone. We do so by controlling the amount of masked tokens in the input sentence, as a proxy to affect the importance of position information for the task. We find that the necessity of position information increases with the amount of masking, and that masked language models without position encodings are not able to reconstruct this information on the task. These findings point towards a direct relationship between the amount of masking and the ability of Transformers to capture order-sensitive aspects of language using position encoding.
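The masking-rate control is simple to illustrate; below is a hedged sketch of the kind of probe described, where the fraction of masked tokens is the experimental knob (the tokenization and mask symbol are placeholders).

```python
import random

def mask_tokens(tokens, mask_rate, mask_token="[MASK]", seed=None):
    """Mask a controllable fraction of tokens: the higher the rate, the fewer
    co-occurrence cues remain for recovering position information."""
    rng = random.Random(seed)
    n = max(1, round(len(tokens) * mask_rate))
    idx = set(rng.sample(range(len(tokens)), n))
    return [mask_token if i in idx else t for i, t in enumerate(tokens)]

sentence = "the cat sat on the mat".split()
for rate in (0.15, 0.4, 0.75):
    print(rate, mask_tokens(sentence, rate, seed=0))
```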